LiDAR point clouds, typically captured by continuously rotating LiDAR sensors, record the precise geometry of the surrounding environment and are crucial for many autonomous detection and navigation tasks. Although many 3D deep architectures have been developed, the efficient collection and annotation of large-scale point clouds remains a major challenge for point cloud analysis and understanding. This paper presents PolarMix, a point cloud augmentation technique that is simple and generic yet can effectively mitigate the data constraint across different perception tasks and scenarios. PolarMix enriches point cloud distributions while preserving point cloud fidelity via two cross-scan augmentation strategies that cut, edit, and mix point clouds along the scanning direction. The first is scene-level swapping, which exchanges point cloud sectors of two LiDAR scans that are cut along the azimuth axis. The second is instance-level rotation and pasting, which crops point instances from one LiDAR scan, rotates them by multiple angles (to create multiple copies), and pastes the rotated point instances into other scans. Extensive experiments show that PolarMix achieves superior performance consistently across different perception tasks and scenarios. In addition, it can work as a plug-and-play module for various 3D deep architectures and also performs well for unsupervised domain adaptation.
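As a rough illustration of the two operations the abstract describes, the NumPy sketch below implements scene-level sector swapping and instance-level rotate-and-paste on bare xyz point arrays. The function names, sector bounds, and rotation angles are illustrative assumptions, and a real pipeline would carry point-wise labels through the same index operations.

```python
import numpy as np

def scene_level_swap(scan_a, scan_b, sector=(0.0, np.pi)):
    """Exchange the points of two scans that fall inside an azimuth sector."""
    def split(points, lo, hi):
        azimuth = np.arctan2(points[:, 1], points[:, 0])  # angle in (-pi, pi]
        inside = (azimuth >= lo) & (azimuth < hi)
        return points[inside], points[~inside]
    in_a, out_a = split(scan_a, *sector)
    in_b, out_b = split(scan_b, *sector)
    # Swap the cut sectors between the two scans.
    return np.concatenate([out_a, in_b]), np.concatenate([out_b, in_a])

def instance_rotate_paste(target_scan, instance_points,
                          angles=(np.pi / 2, np.pi, 3 * np.pi / 2)):
    """Rotate cropped instance points around the z-axis and paste the copies."""
    copies = [target_scan]
    for theta in angles:
        c, s = np.cos(theta), np.sin(theta)
        rot_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        copies.append(instance_points @ rot_z.T)
    return np.concatenate(copies)

# Toy usage with random (N, 3) xyz point clouds.
rng = np.random.default_rng(0)
scan_a, scan_b = rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3))
mixed_a, mixed_b = scene_level_swap(scan_a, scan_b)
augmented = instance_rotate_paste(mixed_a, rng.normal(size=(50, 3)))
```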
Video semantic segmentation has achieved great progress under the supervision of large amounts of labelled training data. However, domain adaptive video segmentation, which can mitigate data labelling constraints by adapting from a labelled source domain toward an unlabelled target domain, is largely neglected. We design temporal pseudo supervision (TPS), a simple and effective method that explores the idea of consistency training for learning effective representations from unlabelled target videos. Unlike traditional consistency training that builds consistency in the spatial space, we explore consistency training in the spatiotemporal space by enforcing model consistency across augmented video frames, which helps learn from more diverse target data. Specifically, we design cross-frame pseudo labelling to provide pseudo supervision from previous video frames while learning from the augmented current video frames. Cross-frame pseudo labelling encourages the network to produce high-certainty predictions, which facilitates consistency training with cross-frame augmentation effectively. Extensive experiments over multiple public datasets show that TPS is simpler to implement, more stable to train, and achieves superior video segmentation accuracy compared with the state-of-the-art.
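A minimal PyTorch sketch of one temporal pseudo-supervision step as the abstract outlines it: pseudo labels come from the previous frame and supervise the augmented current frame. The model interface, the confidence threshold, and the omission of any flow-based alignment between frames are all simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def tps_step(model, prev_frame, curr_frame, augment, conf_threshold=0.9):
    """One consistency-training step on an unlabelled target frame pair."""
    with torch.no_grad():
        # Pseudo labels come from the (un-augmented) previous frame.
        prev_logits = model(prev_frame)                    # (B, C, H, W)
        prob, pseudo_label = prev_logits.softmax(dim=1).max(dim=1)
        valid = prob > conf_threshold                      # keep high-certainty pixels only
    # The network learns from the augmented current frame.
    curr_logits = model(augment(curr_frame))
    loss = F.cross_entropy(curr_logits, pseudo_label, reduction="none")
    return (loss * valid).sum() / valid.sum().clamp(min=1)
```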
Unsupervised domain adaptation aims to align a labelled source domain and an unlabelled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), also called unsupervised domain adaptation without source data, an alternative setting that aims to adapt source-trained models toward target distributions without accessing the source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits historical source hypotheses to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their embeddings generated by the currently adapted model and by historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and settings.
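The two losses could be sketched as below, assuming an InfoNCE form for HCID and a consistency-times-confidence weighting for HCCD; both choices are guesses at plausible instantiations of what the abstract describes, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hcid_loss(curr_emb, hist_emb, temperature=0.07):
    """Historical contrastive instance discrimination: each sample's current
    embedding should match its own historical embedding (positive) against
    those of other samples (negatives)."""
    curr = F.normalize(curr_emb, dim=1)
    hist = F.normalize(hist_emb, dim=1)
    logits = curr @ hist.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(curr.size(0), device=curr.device)
    return F.cross_entropy(logits, targets)

def hccd_loss(curr_logits, hist_logits):
    """Historical contrastive category discrimination: pseudo labels are
    re-weighted by agreement between current and historical predictions."""
    curr_prob, hist_prob = curr_logits.softmax(1), hist_logits.softmax(1)
    pseudo = curr_prob.argmax(1)
    weight = (hist_prob.argmax(1) == pseudo).float() * curr_prob.max(1).values
    return (weight * F.cross_entropy(curr_logits, pseudo, reduction="none")).mean()
```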
Transfer from synthetic to real data has been widely studied to mitigate data annotation constraints in various computer vision tasks such as semantic segmentation. However, this research has focused on 2D images, and its counterpart for 3D point cloud segmentation lags far behind due to the lack of large-scale synthetic datasets and effective transfer methods. We address this issue by collecting SynLiDAR, a large-scale synthetic LiDAR dataset that contains point-wise annotated point clouds with accurate geometric shapes and comprehensive semantic classes. SynLiDAR is collected from multiple virtual environments with rich scenes and layouts, and it consists of over 19 billion points across 32 semantic classes. In addition, we design PCT, a novel point cloud translator that effectively mitigates the gap between synthetic and real point clouds. Specifically, we decompose the synthetic-to-real gap into an appearance component and a sparsity component and handle them separately, which greatly improves point cloud translation. We conducted extensive experiments over three transfer learning setups, including data augmentation, semi-supervised domain adaptation, and unsupervised domain adaptation. The experiments show that SynLiDAR provides a high-quality data source for studying 3D transfer and that the proposed PCT achieves superior point cloud translation consistently across the three setups. SynLiDAR project page: https://github.com/xiaoaoran/synlidar
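The abstract only names the appearance/sparsity decomposition without detailing PCT itself; as a loose stand-in for the sparsity component alone, the sketch below thins a dense synthetic scan beam by beam with Bernoulli dropout. The learned translation in PCT is certainly more sophisticated; this is a hypothetical illustration only.

```python
import numpy as np

def simulate_sparsity(points, beam_ids, keep_prob=0.7, seed=0):
    """Thin a dense synthetic scan beam-by-beam to mimic real-sensor dropout.
    `beam_ids` maps each point to its LiDAR ring; the per-beam Bernoulli
    dropout here is a crude stand-in for a learned sparsity translation."""
    rng = np.random.default_rng(seed)
    kept = np.zeros(len(points), dtype=bool)
    for beam in np.unique(beam_ids):
        idx = np.flatnonzero(beam_ids == beam)
        kept[rng.choice(idx, size=int(keep_prob * len(idx)), replace=False)] = True
    return points[kept]
```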
Representing and synthesizing novel views in real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or utilizing temporal information between several adjacent frames, without considering the underlying background distribution of the entire scene or the transmittance along the ray dimension, which limits their performance on static and occluded areas. Our approach, $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields, offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene; we call it $\text{D}^4$NeRF. Specifically, it employs a neural representation to capture the scene distribution of the static background and a 6D-input NeRF to represent the dynamic objects. Each ray sample is given an additional occlusion weight to indicate how the transmittance is divided between the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and on our urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
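One way to read the occlusion-weight idea is as a per-sample blend of the static and dynamic fields before standard volume rendering. The linear blending rule in the sketch below is an assumption, as the abstract does not specify how the weight enters the compositing.

```python
import torch

def composite_ray(static_rgb, static_sigma, dyn_rgb, dyn_sigma, occ_w, deltas):
    """All inputs are per-sample tensors along one ray of N samples."""
    sigma = occ_w * dyn_sigma + (1 - occ_w) * static_sigma              # blended density (N,)
    rgb = occ_w[:, None] * dyn_rgb + (1 - occ_w[:, None]) * static_rgb  # blended colour (N, 3)
    alpha = 1 - torch.exp(-sigma * deltas)                              # per-sample opacity
    # Standard volume-rendering transmittance: product of (1 - alpha) before each sample.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1] + 1e-10]), dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                          # composited pixel colour

n = 64  # toy ray with random field outputs
pixel = composite_ray(torch.rand(n, 3), torch.rand(n), torch.rand(n, 3),
                      torch.rand(n), torch.rand(n), torch.full((n,), 0.05))
```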
Traditional statistical inference is static, in the sense that the estimate of the quantity of interest does not affect the future evolution of the quantity. In some sequential estimation problems, however, the future values of the quantity to be estimated depend on the estimate of its current value. This type of estimation problem has been formulated as the dynamic inference problem. In this work, we formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn according to a random model parameter. We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss. Moreover, learning for dynamic inference can serve as a meta problem, such that all familiar machine learning problems, including supervised learning, imitation learning and reinforcement learning, can be cast as its special cases or variants. Gaining a good understanding of this unifying meta problem thus sheds light on a broad spectrum of machine learning problems as well.
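A toy instance may help fix ideas: assume the quantity evolves as $x_{t+1} = \theta\,\hat{x}_t + w_t$ with Gaussian noise and a Gaussian prior on the unknown parameter $\theta$, so the future value depends on the current estimate. The online rule below is plain conjugate Bayesian regression applied to this assumed model, illustrating the feedback loop rather than the paper's general derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, noise_std = 0.8, 0.1
mu, var = 1.0, 1.0          # Gaussian prior over the unknown parameter theta
x_hat = 1.0                 # initial estimate of the quantity
total_loss = 0.0

for t in range(50):
    pred = mu * x_hat                                       # Bayes prediction of the next value
    x_next = theta_true * x_hat + noise_std * rng.normal()  # evolution driven by the estimate
    total_loss += (pred - x_next) ** 2                      # accumulated squared inference loss
    # Conjugate posterior update of theta from the observed (x_hat, x_next) pair.
    precision = 1.0 / var + x_hat**2 / noise_std**2
    mu = (mu / var + x_hat * x_next / noise_std**2) / precision
    var = 1.0 / precision
    x_hat = pred            # the estimate itself becomes part of the dynamics

print(f"posterior mean of theta: {mu:.3f} (true value {theta_true})")
```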
Machine learning-based segmentation in medical imaging is widely used in clinical applications from diagnostics to radiotherapy treatment planning. Segmented medical images with ground truth are useful for investigating the properties of different segmentation performance metrics to inform metric selection. Regular geometrical shapes are often used to synthesize segmentation errors and illustrate properties of performance metrics, but they lack the complexity of anatomical variations in real images. In this study, we present a tool to emulate segmentations by adjusting the reference (truth) masks of anatomical objects extracted from real medical images. Our tool is designed to modify the defined truth contours and emulate different types of segmentation errors with a set of user-configurable parameters. We defined the ground truth objects from 230 patient images in the Glioma Image Segmentation for Radiotherapy (GLIS-RT) database. For each object, we used our segmentation synthesis tool to synthesize 10 versions of segmentation (i.e., 10 simulated segmentors or algorithms), where each version has a pre-defined combination of segmentation errors. We then applied 20 performance metrics to evaluate all synthetic segmentations. We demonstrated the properties of these metrics, including their ability to capture specific types of segmentation errors. By analyzing the intrinsic properties of these metrics and categorizing the segmentation errors, we are working toward the goal of developing a decision-tree tool for assisting in the selection of segmentation performance metrics.
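A minimal sketch of the error-synthesis idea using SciPy morphology: perturb a reference mask with dilation, erosion, or a shift to emulate over-segmentation, under-segmentation, and misregistration, then score each synthetic segmentation with a metric such as Dice. The parameter names and the three error types are illustrative, not the tool's actual interface.

```python
import numpy as np
from scipy import ndimage

def synthesize_segmentation(truth, dilate=0, erode=0, shift=(0, 0)):
    """Emulate common segmentation errors on a boolean reference mask."""
    mask = truth.copy()
    if dilate:
        mask = ndimage.binary_dilation(mask, iterations=dilate)   # over-segmentation
    if erode:
        mask = ndimage.binary_erosion(mask, iterations=erode)     # under-segmentation
    if any(shift):
        mask = ndimage.shift(mask.astype(float), shift, order=0) > 0.5  # misregistration
    return mask

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
for params in [dict(dilate=2), dict(erode=2), dict(shift=(3, 3))]:
    seg = synthesize_segmentation(truth, **params)
    print(params, f"Dice = {dice(truth, seg):.3f}")
```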
Event cameras that asynchronously output low-latency event streams provide great opportunities for state estimation under challenging situations. Although event-based visual odometry has been extensively studied in recent years, most work is based on monocular setups, and research on stereo event vision remains scarce. In this paper, we present ESVIO, the first event-based stereo visual-inertial odometry, which leverages the complementary advantages of event streams, standard images and inertial measurements. Our proposed pipeline achieves temporal tracking and instantaneous matching between consecutive stereo event streams, thereby obtaining robust state estimation. In addition, a motion compensation method is designed to emphasize scene edges by warping each event to a reference moment using the IMU and the ESVIO back-end. We validate that both ESIO (purely event-based) and ESVIO (events aided by images) achieve superior performance compared with other image-based and event-based baseline methods on public and self-collected datasets. Furthermore, we use our pipeline to perform onboard quadrotor flights in low-light environments. A real-world large-scale experiment is also conducted to demonstrate long-term effectiveness. We highlight that this work is a real-time, accurate system aimed at robust state estimation in challenging environments.
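The motion-compensation step can be pictured with a small sketch: assuming pure rotation at constant angular velocity over the window, each event's bearing is rotated back to the reference timestamp and reprojected, which sharpens scene edges. The intrinsics matrix `K` and the rotation-only model are assumptions for illustration, not the ESVIO implementation.

```python
import numpy as np

def compensate_events(events, t_ref, omega, K):
    """events: (N, 3) rows of (x, y, t); omega: gyro angular velocity in rad/s."""
    K_inv = np.linalg.inv(K)
    warped = np.empty_like(events)
    for i, (x, y, t) in enumerate(events):
        rotvec = omega * (t_ref - t)           # rotation accumulated back to t_ref
        angle = np.linalg.norm(rotvec)
        if angle < 1e-9:
            R = np.eye(3)
        else:
            k = rotvec / angle                 # Rodrigues' rotation formula
            skew = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = np.eye(3) + np.sin(angle) * skew + (1 - np.cos(angle)) * (skew @ skew)
        ray = R @ (K_inv @ np.array([x, y, 1.0]))   # rotate the bearing vector
        u = K @ (ray / ray[2])                      # reproject onto the image plane
        warped[i] = [u[0], u[1], t_ref]
    return warped
```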
Human Activity Recognition (HAR) is one of the core research areas in mobile and wearable computing. With the application of deep learning (DL) techniques such as CNNs, recognizing periodic or static activities (e.g., walking, lying, cycling) has become a well-studied problem. What remains a major challenge, though, is the sporadic activity recognition (SAR) problem, where activities of interest tend to be non-periodic and occur less frequently compared with the often large amount of irrelevant background activities. Recent works suggested that sequential DL models (such as LSTMs) have great potential for modeling non-periodic behaviours, and in this paper we study several LSTM training strategies for SAR. Specifically, we propose two simple yet effective LSTM variants, namely the delay model and the inverse model, for two SAR scenarios (with and without a time-critical requirement). For time-critical SAR, the delay model can effectively exploit predefined delay intervals (within tolerance) as contextual information for improved performance. For the regular SAR task, the second variant, the inverse model, can learn patterns from the time series in an inverse manner, which is complementary to the forward model (i.e., a standard LSTM), and combining both can boost performance. These two LSTM variants are very practical, and they can be deemed training strategies that require no alteration of the LSTM fundamentals. We also study some additional LSTM training strategies that can further improve accuracy. We evaluated our models on two SAR and one non-SAR datasets, and the promising results demonstrate the effectiveness of our approaches in HAR applications.
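A PyTorch sketch of the two variants viewed as pure training strategies: the inverse model is an ordinary LSTM fed the time-reversed sequence, and the delay model simply shifts the per-step targets by the tolerated delay. Layer sizes, the padding choice, and the ensembling rule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features=9, n_classes=5, hidden=64, reverse=False):
        super().__init__()
        self.reverse = reverse                  # True -> inverse model
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (B, T, F)
        if self.reverse:
            x = torch.flip(x, dims=[1])         # read the series back-to-front
        out, _ = self.lstm(x)
        return self.head(out)                   # per-step logits (B, T, C)

def delayed_targets(labels, delay):
    """Delay model: shift per-step labels by `delay` steps so the network may
    answer within the tolerated interval; front padding repeats early labels."""
    return torch.cat([labels[:, :delay], labels[:, :-delay]], dim=1) if delay else labels

# Combining the forward and inverse models, as the abstract suggests.
fwd, inv = SeqClassifier(), SeqClassifier(reverse=True)
x = torch.randn(8, 100, 9)
logits = fwd(x) + torch.flip(inv(x), dims=[1])  # re-align inverse outputs in time
```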
Given a natural language description of a user's demands, the NL2Code task aims to generate code that addresses those demands. This is a critical but challenging task that mirrors the capabilities of AI-powered programming. The NL2Code task is inherently versatile, diverse and complex; for example, a demand can be described in different languages, in different formats, and at different levels of granularity. This motivated us to conduct this survey of NL2Code. In this survey, we focus on how neural networks (NNs) solve NL2Code. We first propose a comprehensive framework that is able to cover all studies in this field. Then, we parse the existing studies in depth and organize them within this framework. We maintain an online website to record the parsing results, which tracks existing and recent NL2Code progress. In addition, we summarize the current challenges of NL2Code as well as its future directions. We hope that this survey can foster the evolution of this field.